
    Reconstituting typeset Marriage Registers using simple software tools

    In a world of fully integrated software applications, which can seem daunting to develop and to maintain, it is sometimes useful to recall that a system of loosely linked software components can provide surprisingly powerful and flexible methods for software development. This paper describes a project which aims to retypeset a series of volumes from the Phillimore Marriage Registers, first published in England around the turn of the last century. The source material is plain text derived from running Optical Character Recognition (OCR) on a set of page scans taken from the original printed volumes. The regular, tabular structure of the Register pages allows us to automate the retypesetting process. The UNIX troff software and its tbl preprocessor are used for the typesetting itself, but a series of simple awk-based software tools, all of them parsers and code generators of one sort or another, is used to bring about the OCR-to-troff transformation. By re-parsing the generated troff codes it is possible to produce a surname index as a supplement to the retypeset volume. Moreover, this second-stage parsing has been invaluable in discovering subtle ‘typos’ in the automatically generated material. With small adjustments to this parser it would be possible to output the complete marriage entries in standard XML or GEDCOM notations.
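    As a rough illustration of the parse-and-generate stage the abstract describes, the sketch below (in Python rather than awk, and with an invented entry layout; the actual volumes' format will differ) reads OCR'd register lines and emits a troff/tbl table:

        # Minimal sketch of the OCR-to-tbl stage: parse one marriage entry per
        # line and emit a tbl block. The layout matched here
        # ("Groom SURNAME & Bride SURNAME  12 Jan 1754") is an assumption for
        # illustration, not the layout of the real registers.
        import re
        import sys

        ENTRY = re.compile(
            r"^(?P<groom>.+?) & (?P<bride>.+?)\s{2,}(?P<date>\d{1,2} \w{3,} \d{4})$"
        )

        def to_tbl(lines):
            """Yield a tbl block, one tab-separated row per parsed entry."""
            yield ".TS"
            yield "l l l."          # three left-aligned columns (tab-delimited)
            for line in lines:
                m = ENTRY.match(line.strip())
                if m:               # silently skip lines OCR mangled too badly
                    yield "\t".join(m.group("groom", "bride", "date"))
            yield ".TE"

        if __name__ == "__main__":
            for row in to_tbl(sys.stdin):
                print(row)

    A second pass over the generated rows, keyed on the surname fields, would then be enough to accumulate the surname index the paper mentions.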

    A mathematical and computational review of Hartree-Fock SCF methods in Quantum Chemistry

    We present here a review of the fundamental topics of Hartree-Fock theory in Quantum Chemistry. From the molecular Hamiltonian, using and discussing the Born-Oppenheimer approximation, we arrive at the Hartree and Hartree-Fock equations for the electronic problem. Special emphasis is placed on the most relevant mathematical aspects of the theoretical derivation of the final equations, as well as on the results regarding the existence and uniqueness of their solutions. All Hartree-Fock versions with different spin restrictions are systematically extracted from the general case, thus providing a unifying framework. Then, the discretization of the one-electron orbital space is reviewed and the Roothaan-Hall formalism is introduced. This leads to an exposition of the basic concepts underlying the construction and selection of Gaussian basis sets, focusing on algorithmic efficiency issues. Finally, we close the review with a section in which the most relevant modern developments (especially those related to the design of linear-scaling methods) are discussed and linked to the issues presented earlier. The whole work is intentionally introductory and rather self-contained, so that it may be useful for non-experts who aim to use quantum chemical methods in interdisciplinary applications. Moreover, much material that is found scattered in the literature has been put together here to facilitate comprehension and to serve as a handy reference.
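    To connect the Roothaan-Hall formalism to working code, here is a minimal sketch of a restricted Hartree-Fock SCF loop in Python/NumPy. It assumes the one- and two-electron integrals over the chosen Gaussian basis (hcore, s, eri) are supplied by an integrals library, and it uses the plainest possible iteration, with none of the convergence acceleration or linear-scaling techniques the review discusses:

        import numpy as np
        from scipy.linalg import eigh

        def scf(hcore, s, eri, n_occ, max_iter=50, tol=1e-8):
            """Restricted Hartree-Fock via the Roothaan-Hall equations FC = SCe.

            hcore: (n, n) core Hamiltonian;  s: (n, n) overlap matrix;
            eri:   (n, n, n, n) two-electron integrals, chemists' notation (pq|rs);
            n_occ: number of doubly occupied orbitals.
            Returns the electronic energy (nuclear repulsion not included),
            orbital energies, and MO coefficients.
            """
            f = hcore.copy()                       # core-Hamiltonian initial guess
            e_old = 0.0
            for _ in range(max_iter):
                eps, c = eigh(f, s)                # generalized eigenproblem FC = SCe
                c_occ = c[:, :n_occ]
                d = 2.0 * c_occ @ c_occ.T          # closed-shell density matrix
                j = np.einsum("pqrs,rs->pq", eri, d)   # Coulomb
                k = np.einsum("prqs,rs->pq", eri, d)   # exchange
                f = hcore + j - 0.5 * k
                e = 0.5 * np.einsum("pq,pq->", d, hcore + f)
                if abs(e - e_old) < tol:
                    return e, eps, c
                e_old = e
            raise RuntimeError("SCF did not converge")

    A production code would add a better initial guess and DIIS-style acceleration, but the fixed point reached here is the same self-consistent field the review derives.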

    Econometric Forecasting

    Several principles are useful for econometric forecasters: keep the model simple, use all the data you can get, and use theory (not the data) as a guide to selecting causal variables. But theory gives little guidance on dynamics, that is, on which lagged values of the selected variables to use. Early econometric models failed in comparison with extrapolative methods because they paid too little attention to dynamic structure. In a fairly simple way, the vector autoregression (VAR) approach that first appeared in the 1980s resolved the problem by shifting emphasis towards dynamics and away from collecting many causal variables. The VAR approach also resolves the question of how to make long-term forecasts where the causal variables themselves must be forecast. When the analyst does not need to forecast causal variables or can use other sources, he or she can use a single equation with the same dynamic structure. Ordinary least squares is a perfectly adequate estimation method. Evidence supports estimating the initial equation in levels, whether the variables are stationary or not. We recommend a general-to-specific model-building strategy: start with a large number of lags in the initial estimation, although simplifying by reducing the number of lags pays off. Evidence on the value of further simplification is mixed. If there is no cointegration among the variables, then error-correction models (ECMs) will do worse than equations in levels. But ECMs are only sometimes an improvement even when the variables are cointegrated.
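    A minimal sketch of the VAR workflow described above, using the statsmodels library; the series names, simulated sample, and maximum lag are placeholders, not choices endorsed by the chapter:

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR

        # Illustrative data: three levels series (stationary or not); the
        # column names are invented stand-ins for real causal variables.
        rng = np.random.default_rng(0)
        df = pd.DataFrame(rng.normal(size=(200, 3)).cumsum(axis=0),
                          columns=["gdp", "rate", "price"])

        # General-to-specific: allow a generous maximum lag in the initial
        # estimation, then let an information criterion prune it back.
        results = VAR(df).fit(maxlags=8, ic="aic")
        print("lags retained:", results.k_ar)

        # Long-horizon forecasts: the VAR forecasts the causal variables
        # jointly, so no external forecasts of the regressors are needed.
        fcst = results.forecast(df.values[-results.k_ar:], steps=12)
        print(fcst.shape)  # (12, 3)

    Estimation here is equation-by-equation ordinary least squares in levels, matching the recommendation in the text.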

    A review of health care models for coronary heart disease interventions

    This article reviews models for the treatment of coronary heart disease (CHD). Whereas most of the models described were developed to assess the cost-effectiveness of different treatment strategies, other models have also been used to extrapolate clinical trials, for capacity and resource planning, or to predict the future population with heart disease. In this paper we investigate the use of modelling techniques in relation to different types of health intervention, and we discuss the assumptions and limitations of these approaches. Many of the models reviewed in this paper use decision tree models for acute or short-term interventions, and Markov or state-transition models for chronic or long-term interventions. Discrete event simulation has, however, been used for more complex whole-system models, and for modelling resource-constrained interventions and operational planning. Nearly all of the studies in our review used cohort-based models rather than population-based models, and therefore few models could estimate the likely total costs and benefits for a population group. Most studies used de novo, purpose-built models consisting of only a small number of health states. Models of the whole disease system were less common. The model descriptions were often incomplete. We recommend that the reporting of model structure, assumptions and input parameters be made more explicit, to reduce the risk of biased reporting and to ensure greater confidence in the model results.
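    As a concrete (and entirely invented) example of the cohort-style Markov models that dominate the review, the sketch below runs a three-state cohort through annual cycles and accumulates discounted costs and QALYs; the states, transition probabilities, costs, and utilities are illustrative only:

        import numpy as np

        # Hypothetical three-state CHD cohort model: well -> post-MI -> dead.
        states = ["well", "post-MI", "dead"]
        P = np.array([[0.95, 0.04, 0.01],   # annual transition probabilities
                      [0.00, 0.90, 0.10],   # (each row sums to 1)
                      [0.00, 0.00, 1.00]])
        cost = np.array([100.0, 2500.0, 0.0])    # cost per state-year
        utility = np.array([0.95, 0.70, 0.0])    # QALY weight per state-year

        cohort = np.array([1.0, 0.0, 0.0])       # the whole cohort starts well
        total_cost = total_qaly = 0.0
        for year in range(1, 21):                # 20 one-year cycles
            cohort = cohort @ P
            disc = 1.035 ** -year                # 3.5% annual discounting
            total_cost += disc * cohort @ cost
            total_qaly += disc * cohort @ utility

        print(f"20-year cost/person: {total_cost:.0f}, QALYs: {total_qaly:.2f}")

    Comparing two such runs with different transition probabilities or costs is the basic mechanism behind the cost-effectiveness comparisons the review describes; population-based or discrete-event models replace the single cohort vector with heterogeneous individuals and resource constraints.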

    Long-term (180-day) outcomes in critically ill patients with COVID-19 in the REMAP-CAP randomized clinical trial

    Importance: The longer-term effects of therapies for the treatment of critically ill patients with COVID-19 are unknown.
    Objective: To determine the effect of multiple interventions for critically ill adults with COVID-19 on longer-term outcomes.
    Design, Setting, and Participants: Prespecified secondary analysis of an ongoing adaptive platform trial (REMAP-CAP) testing interventions within multiple therapeutic domains, in which 4869 critically ill adult patients with COVID-19 were enrolled between March 9, 2020, and June 22, 2021, from 197 sites in 14 countries. The final 180-day follow-up was completed on March 2, 2022.
    Interventions: Patients were randomized to receive 1 or more interventions within 6 treatment domains: immune modulators (n = 2274), convalescent plasma (n = 2011), antiplatelet therapy (n = 1557), anticoagulation (n = 1033), antivirals (n = 726), and corticosteroids (n = 401).
    Main Outcomes and Measures: The main outcome was survival through day 180, analyzed using a Bayesian piecewise exponential model. A hazard ratio (HR) less than 1 represented improved survival (superiority), while an HR greater than 1 represented worsened survival (harm); futility was represented by a relative improvement of less than 20% in outcome, shown by an HR greater than 0.83 (i.e., above 1/1.2).
    Results: Among 4869 randomized patients (mean age, 59.3 years; 1537 [32.1%] women), 4107 (84.3%) had known vital status and 2590 (63.1%) were alive at day 180. IL-6 receptor antagonists had a greater than 99.9% probability of improving 6-month survival (adjusted HR, 0.74 [95% credible interval {CrI}, 0.61-0.90]) and antiplatelet agents had a 95% probability of improving 6-month survival (adjusted HR, 0.85 [95% CrI, 0.71-1.03]) compared with the control, while the probability of trial-defined statistical futility (HR >0.83) was high for therapeutic anticoagulation (99.9%; HR, 1.13 [95% CrI, 0.93-1.42]), convalescent plasma (99.2%; HR, 0.99 [95% CrI, 0.86-1.14]), and lopinavir-ritonavir (96.6%; HR, 1.06 [95% CrI, 0.82-1.38]), and the probabilities of harm from hydroxychloroquine (96.9%; HR, 1.51 [95% CrI, 0.98-2.29]) and the combination of lopinavir-ritonavir and hydroxychloroquine (96.8%; HR, 1.61 [95% CrI, 0.97-2.67]) were high. The corticosteroid domain was stopped early prior to reaching a predefined statistical trigger; there was a 57.1% to 61.6% probability of improving 6-month survival across varying hydrocortisone dosing strategies.
    Conclusions and Relevance: Among critically ill patients with COVID-19 randomized to receive 1 or more therapeutic interventions, treatment with an IL-6 receptor antagonist had a greater than 99.9% probability of improved 180-day mortality compared with patients randomized to the control, and treatment with an antiplatelet agent had a 95.0% probability of improved 180-day mortality compared with patients randomized to the control. Overall, when considered with previously reported short-term results, the findings indicate that initial in-hospital treatment effects were consistent for most therapies through 6 months.
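    To make the decision quantities concrete: given posterior draws of a hazard ratio, the superiority, futility, and credible-interval summaries reduce to simple tail probabilities. The sketch below simulates draws from a lognormal distribution as a stand-in for output of the trial's Bayesian piecewise exponential model; it is illustrative only, not the trial's analysis code:

        import numpy as np

        # Simulated posterior draws of an HR (stand-in for real model output).
        rng = np.random.default_rng(1)
        hr_draws = rng.lognormal(mean=np.log(0.74), sigma=0.10, size=100_000)

        p_superiority = np.mean(hr_draws < 1.0)    # improved survival
        p_futility = np.mean(hr_draws > 0.83)      # < 20% relative improvement
        lo, hi = np.quantile(hr_draws, [0.025, 0.975])  # 95% credible interval

        print(f"P(superiority) = {p_superiority:.3f}")
        print(f"P(futility)    = {p_futility:.3f}")
        print(f"95% CrI: {lo:.2f}-{hi:.2f}")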